To address the quantization error introduced when local features are quantized against the visual vocabulary in the traditional Bag-of-Visual-Words (BoVW) model, an image retrieval model based on a visual vocabulary weighted by feature-space information is proposed. Taking into account the clustering method used to generate the visual codebook, statistical information about the feature space is analyzed during the clustering process. Experimental comparison of different weighting methods shows that the mean weighted average performs best; it is therefore used to weight the visual words and improve the descriptive ability of the codebook. Experiments on the ImageNet dataset show that, compared with a homologous visual codebook, a non-homologous visual codebook has less impact on the partitioning of the visual space, and that the codebook weighted by feature-space information performs better on large datasets.
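The pipeline described above can be sketched in a minimal form: a k-means codebook is built from local descriptors, each visual word receives a weight derived from feature-space statistics of its cluster, and images are then encoded as weighted BoVW histograms. The exact "mean weighted average" scheme is not specified here, so the sketch below assumes, for illustration only, a weight inversely proportional to a cluster's mean member-to-centre distance (tighter clusters get larger weights); the function names and this weighting rule are hypothetical, not the paper's method.

```python
import numpy as np

def build_weighted_codebook(features, k, iters=20, seed=0):
    """Build a k-means visual codebook and assign each visual word a
    weight from feature-space statistics of its cluster. The weight
    here (inverse mean distance to the cluster centre) is an assumed
    stand-in for the paper's mean-weighted-average scheme."""
    rng = np.random.default_rng(seed)
    feats = features.astype(float)
    centres = feats[rng.choice(len(feats), k, replace=False)].copy()
    for _ in range(iters):
        # assign each descriptor to its nearest centre
        d = np.linalg.norm(feats[:, None] - centres[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            members = feats[assign == j]
            if len(members):
                centres[j] = members.mean(axis=0)
    # per-word weights: tighter clusters -> larger weight (assumption)
    d = np.linalg.norm(feats[:, None] - centres[None], axis=2)
    assign = d.argmin(axis=1)
    weights = np.empty(k)
    for j in range(k):
        in_j = assign == j
        mean_dist = d[in_j, j].mean() if in_j.any() else np.inf
        weights[j] = 1.0 / (mean_dist + 1e-8)
    return centres, weights / weights.sum()

def weighted_bovw_histogram(local_feats, centres, weights):
    """Quantize an image's local descriptors to their nearest visual
    words, then scale the word counts by the learned weights."""
    d = np.linalg.norm(local_feats.astype(float)[:, None] - centres[None], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(centres)).astype(float)
    hist *= weights
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Retrieval would then compare these weighted histograms (e.g. by cosine similarity), so words from tight, discriminative regions of the feature space contribute more than words from diffuse ones.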